77 research outputs found

    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken from the front-facing camera of smartphones, as well as associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken from front-facing cameras. We discuss how these findings affect mobile applications that leverage face and eye detection, and derive practical implications to address the state of the art's limitations.

    Camera based motion recognition for mobile interaction

    Multiple built-in cameras and the small size of mobile phones are underexploited assets for creating novel applications that are ideal for pocket-size devices, but may not make much sense with laptops. In this paper we present two vision-based methods for the control of mobile user interfaces based on motion tracking and recognition. In the first case the motion is extracted by estimating the movement of the device held in the user's hand. In the second it is produced by tracking the motion of the user's finger in front of the device. In both alternatives, sequences of motion are classified using Hidden Markov Models. The results of the classification are filtered using a likelihood ratio and the velocity entropy to reject possibly incorrect sequences. Our hypothesis here is that incorrect measurements are characterised by a higher entropy value for their velocity histogram, denoting more random movements by the user. We also show that using the same filtering criteria we can control unsupervised Maximum A Posteriori adaptation. Experiments conducted on a recognition task involving simple control gestures for mobile phones clearly demonstrate the potential of our approaches and may provide ingredients for new user interface designs.
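    The velocity-entropy rejection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bin count, velocity range, and acceptance threshold are assumed placeholder values, and the likelihood-ratio test is omitted.

```python
import numpy as np

def velocity_entropy(velocities, bins=16, v_range=(0.0, 5.0)):
    """Shannon entropy (bits) of the velocity-magnitude histogram.

    A fixed histogram range (an assumption here) keeps entropies
    comparable across sequences; erratic motion spreads mass over
    many bins and therefore yields a higher entropy.
    """
    hist, _ = np.histogram(velocities, bins=bins, range=v_range)
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins (0*log 0 := 0)
    return float(-np.sum(p * np.log2(p)))

def accept_sequence(velocities, threshold=3.0):
    # Placeholder threshold; in practice it would be tuned on
    # labelled correct/incorrect gesture sequences.
    return velocity_entropy(velocities) < threshold

rng = np.random.default_rng(0)
steady = np.full(100, 1.0)              # deliberate, smooth gesture
jitter = rng.uniform(0.0, 5.0, 100)     # random hand movement
assert velocity_entropy(steady) < velocity_entropy(jitter)
```

    A sequence whose velocities concentrate in a few histogram bins is kept; one whose velocities spread across the whole range, suggesting random movement, is rejected.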

    Impaired HDL2-mediated cholesterol efflux is associated with metabolic syndrome in families with early onset coronary heart disease and low HDL-cholesterol level

    Objective: The potential of high-density lipoproteins (HDL) to facilitate cholesterol removal from arterial foam cells is a key function of HDL. We studied whether cholesterol efflux to serum and HDL subfractions is impaired in subjects with early coronary heart disease (CHD) or metabolic syndrome (MetS) in families where a low HDL-cholesterol level (HDL-C) predisposes to early CHD.
    Methods: HDL subfractions were isolated from plasma by sequential ultracentrifugation. THP-1 macrophages loaded with acetyl-LDL were used in the assay of cholesterol efflux to total HDL, HDL2, HDL3, or serum.
    Results: While cholesterol efflux to serum, total HDL, and HDL3 was unchanged, the efflux to HDL2 was 14% lower in subjects with MetS than in subjects without MetS (p<0.001). The efflux to HDL2 was associated with components of MetS such as plasma HDL-C (r = 0.76 in men and r = 0.56 in women, p<0.001 for both). The efflux to HDL2 was reduced in men with early CHD (p<0.01) only in conjunction with their low HDL-C. The phospholipid content of HDL2 particles was a major correlate of the efflux to HDL2 (r = 0.70, p<0.001). A low ratio of HDL2 to total HDL was associated with MetS (p<0.001).
    Conclusion: Our results indicate that impaired efflux to HDL2 is a functional feature of the low HDL-C state and MetS in families where these risk factors predispose to early CHD. The efflux to HDL2 was related to the phospholipid content of HDL2 particles, but the phospholipid content did not account for the impaired efflux in cardiometabolic disease, where a combination of low level and poor quality of HDL2 was observed.

    Camera based motion estimation and recognition for human-computer interaction

    No full text
    Communicating with mobile devices has become an unavoidable part of our daily life. Unfortunately, the current user interface designs are mostly taken directly from desktop computers. This has resulted in devices that are sometimes hard to use. Since more processing power and new sensing technologies are already available, there is a possibility to develop systems to communicate through different modalities. This thesis proposes some novel computer vision approaches, including head tracking, object motion analysis and device ego-motion estimation, to allow efficient interaction with mobile devices. For head tracking, two new methods have been developed. The first method detects a face region and facial features by employing skin detection, morphology, and a geometrical face model. The second method, designed especially for mobile use, detects the face and eyes using local texture features. In both cases, Kalman filtering is applied to estimate the 3-D pose of the head. Experiments indicate that the methods introduced can be applied on platforms with limited computational resources. A novel object tracking method is also presented. The idea is to combine Kalman filtering and EM algorithms to track an object, such as a finger, using motion features. This technique is also applicable when conventional methods such as colour segmentation and background subtraction cannot be used. In addition, a new feature-based camera ego-motion estimation framework is proposed. The method introduced exploits gradient measures for feature selection and feature displacement uncertainty analysis. Experiments with a fixed-point implementation testify to the effectiveness of the approach on a camera-equipped mobile phone. The feasibility of the methods developed is demonstrated in three new mobile interface solutions. One of them estimates the ego-motion of the device with respect to the user's face and utilises that information for browsing large documents or bitmaps on small displays. The second solution is to use device or finger motion to recognize simple gestures. In addition to these applications, a novel interactive system to build document panorama images is presented. The motion estimation and recognition techniques presented in this thesis have clear potential to become practical means for interacting with mobile devices. In fact, cameras in future mobile devices may, most of the time, be used as sensors for intuitive user interfaces rather than for digital photography.
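    The Kalman-filtering ingredient shared by the tracking methods above can be illustrated with a minimal constant-velocity filter for a 2-D point such as a fingertip. This is a generic textbook sketch under assumed parameters, not the thesis's implementation: the state layout, noise covariances, and synthetic measurements are all placeholders.

```python
import numpy as np

dt = 1.0                                    # frame interval (assumed)
# State: [x, y, vx, vy] with a constant-velocity motion model.
F = np.array([[1., 0., dt, 0.],
              [0., 1., 0., dt],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],             # only position is observed
              [0., 1., 0., 0.]])
Q = 0.01 * np.eye(4)                        # process noise (assumed)
R = 0.25 * np.eye(2)                        # measurement noise (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle for a position measurement z = (x, y)."""
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # corrected state
    P = (np.eye(4) - K @ H) @ P             # corrected covariance
    return x, P

# Track a synthetic fingertip moving right at 1 px/frame under noisy detections.
rng = np.random.default_rng(0)
x, P = np.zeros(4), np.eye(4)
for t in range(50):
    z = np.array([float(t), 0.0]) + rng.normal(0.0, 0.5, 2)
    x, P = kalman_step(x, P, z)
# x[:2] now estimates the current position, x[2:] the velocity.
```

    The same predict/update structure carries over to the head-pose case; only the state vector and measurement model change.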